    Network as a computer: ranking paths to find flows

    We explore a simple mathematical model of network computation based on Markov chains. Similar models apply to a broad range of computational phenomena arising in networks of computers, as well as in genetic and neural nets, in social networks, and so on. The main problem in interacting with such spontaneously evolving computational systems is that the data are not uniformly structured. An interesting approach is to try to extract the semantic content of the data from their distribution among the nodes. A concept is then identified by finding the community of nodes that share it. The task of data structuring is thus reduced to the task of finding the network communities, as groups of nodes that together perform some non-local data processing. Towards this goal, we extend ranking methods from nodes to paths. This allows us to extract some information about the likely flow biases from the available static information about the network.
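    As a concrete illustration of extending node ranking to paths, the sketch below builds a Markov transition matrix from an adjacency matrix, computes the stationary node distribution by power iteration, and scores a path by the stationary mass of its start node times the product of transition probabilities along it. The scoring rule and the toy network are assumptions made for illustration, not the paper's exact construction.

```python
# Minimal sketch: node ranking via a Markov chain, extended to path ranking.
# The path score (stationary mass of the start node times the product of
# transition probabilities) is an illustrative assumption.
import numpy as np

def transition_matrix(adj):
    """Row-normalize an adjacency matrix into Markov transition probabilities."""
    adj = np.asarray(adj, dtype=float)
    row_sums = adj.sum(axis=1, keepdims=True)
    return np.divide(adj, row_sums, out=np.zeros_like(adj), where=row_sums > 0)

def stationary_distribution(P, iters=1000, tol=1e-12):
    """Power iteration for the stationary distribution pi satisfying pi = pi P."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            break
        pi = new
    return pi

def path_score(P, pi, path):
    """Score a path by its probability under the stationary chain."""
    score = pi[path[0]]
    for a, b in zip(path, path[1:]):
        score *= P[a, b]
    return score

# Toy 4-node network: rank two competing paths from node 0 to node 3.
A = np.array([[0, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 0, 0, 0]])
P = transition_matrix(A)
pi = stationary_distribution(P)
for path in [(0, 1, 3), (0, 2, 3)]:
    print(path, path_score(P, pi, path))
```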

    The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: Boeing Helicopters airframe finite element modeling

    Mathematical models based on the finite element method of structural analysis, as embodied in the NASTRAN computer code, are routinely used by the helicopter industry to calculate the airframe static internal loads used for sizing structural members. Historically, less reliance has been placed on the vibration predictions based on these models. Beginning in the early 1980s, NASA's Langley Research Center initiated an industry-wide program with the objective of engendering the needed trust in vibration predictions using these models and establishing a body of modeling guides that would enable confident prediction of airframe vibration as part of the regular design process. Emphasis in this paper is placed on the successful modeling of the Army/Boeing CH-47D, which showed reasonable correlation with test data. A principal finding is that improved dynamic analysis requires greater attention to detail, and perhaps a finer mesh, than the usual stress model, especially in the mass distribution. Post-program modeling efforts show improved correlation, placing key modal frequencies in the b/rev range to within 4 percent of the test frequencies.
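    The vibration predictions in question come from finite element modal analysis, which amounts to solving the generalized eigenproblem K·φ = ω²·M·φ for the model's stiffness and mass matrices. The sketch below solves this for a toy 3-degree-of-freedom spring-mass chain with made-up stiffness and mass values; a NASTRAN airframe model has the same mathematical form at far larger scale.

```python
# Minimal sketch of the modal analysis behind airframe vibration prediction:
# solve K*phi = omega^2 * M*phi for a toy fixed-free spring-mass chain.
# Stiffness and mass values are hypothetical, chosen only for illustration.
import numpy as np
from scipy.linalg import eigh

k, m = 1.0e6, 10.0                 # hypothetical spring stiffness (N/m), mass (kg)
K = k * np.array([[ 2, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]])   # stiffness matrix of a fixed-free chain
M = m * np.eye(3)                  # lumped (diagonal) mass matrix

w2, modes = eigh(K, M)             # eigenvalues are omega^2, columns are mode shapes
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print("natural frequencies (Hz):", freqs_hz)
```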

    Generating Mathematical Derivations with Large Language Models

    The derivation of mathematical results in specialised fields using Large Language Models (LLMs) is an emerging research direction that can help identify models' limitations and potentially support mathematical discovery. In this paper, we leverage a symbolic engine to generate derivations of equations at scale, and investigate the capabilities of LLMs when deriving goal equations from premises. Specifically, we employ in-context learning for GPT and fine-tune a range of T5 models to compare the robustness and generalisation of pre-training strategies against specialised models. Empirical results show that a fine-tuned FLAN-T5-large (MathT5) outperforms GPT models on all static and out-of-distribution test sets in terms of absolute performance. However, an in-depth analysis reveals that the fine-tuned models are more sensitive to perturbations involving unseen symbols and, to a lesser extent, to changes in equation structure. In addition, we analyse 1.7K equations and over 200 derivations to highlight common reasoning errors, such as the inclusion of incorrect, irrelevant, and redundant equations, along with a tendency to skip derivation steps. Finally, we explore the suitability of existing metrics for evaluating mathematical derivations, finding evidence that, while they capture general properties such as sensitivity to perturbations, they fail to highlight fine-grained reasoning errors and essential differences between models. Overall, this work demonstrates that training models on synthetic data can improve their mathematical capabilities beyond larger architectures.
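    A minimal sketch of the data-generation idea, assuming SymPy as the symbolic engine: start from a premise equation, apply recorded operations to both sides, and emit the resulting step sequence as a (premise, steps, goal) derivation. The premise and operation set here are toy choices, not the paper's actual generation pipeline.

```python
# Minimal sketch of symbolic derivation generation: apply recorded
# operations to both sides of a premise equation and record each step.
import sympy as sp

x = sp.Symbol('x')
f = sp.Function('f')

premise = sp.Eq(f(x), sp.sin(x) * sp.exp(x))
steps = [premise]

# Step 1: differentiate both sides with respect to x.
steps.append(sp.Eq(sp.diff(steps[-1].lhs, x), sp.diff(steps[-1].rhs, x)))

# Step 2: expand the right-hand side.
steps.append(sp.Eq(steps[-1].lhs, sp.expand(steps[-1].rhs)))

# The last entry is the goal equation; the whole list is one training example.
for i, eq in enumerate(steps):
    print(f"step {i}: {sp.latex(eq)}")
```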

    Harder, better, faster, stronger: understanding and improving the tractability of large energy system models

    Energy system models based on linear programming have been growing in size with the increasing need to model renewables in high spatial and temporal detail. Larger models lead to high computational requirements, and seemingly small changes in a model can lead to drastic differences in runtime. Here, we investigate measures to address this issue. We review the mathematical structure of a typical energy system model and discuss issues of sparsity, degeneracy, and large numerical range. We introduce and test a method to automatically scale models to improve their numerical range. We test this method, as well as tweaks to model formulation and solver preferences, finding that such adjustments can have a substantial impact on runtime. In particular, the barrier method without crossover can be very fast, but it affects the structure of the resulting optimal solution. We conclude with a range of recommendations for energy system modellers.
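    One standard automatic scaling scheme, sketched below, is iterative geometric-mean scaling: each row and column of the constraint matrix is rescaled by the geometric mean of its extreme nonzero magnitudes so that coefficients cluster near 1. This is a common approach to improving numerical range, though not necessarily the exact method the paper tests; note that the right-hand side and variable bounds must be rescaled by the same factors. (In Gurobi, for example, the barrier-without-crossover setting mentioned above corresponds to Method=2 with Crossover=0.)

```python
# Minimal sketch of iterative geometric-mean scaling of an LP constraint
# matrix A, shrinking the numerical range the solver sees. Zeros are
# ignored when computing each row's/column's extreme magnitudes.
import numpy as np

def geometric_mean_scale(A, passes=10):
    """Return scaled matrix plus accumulated row and column scale factors."""
    A = A.astype(float).copy()
    r = np.ones(A.shape[0])   # row scale factors (also apply to rhs b)
    c = np.ones(A.shape[1])   # column scale factors (also apply to bounds)
    for _ in range(passes):
        mags = np.where(A != 0, np.abs(A), np.nan)
        s = 1.0 / np.sqrt(np.nanmin(mags, axis=1) * np.nanmax(mags, axis=1))
        A *= s[:, None]; r *= s
        mags = np.where(A != 0, np.abs(A), np.nan)
        s = 1.0 / np.sqrt(np.nanmin(mags, axis=0) * np.nanmax(mags, axis=0))
        A *= s[None, :]; c *= s
    return A, r, c

A = np.array([[1e-4, 2e3], [5e2, 1e-6]])
B, r, c = geometric_mean_scale(A)
print("coefficient range before:", np.abs(A).max() / np.abs(A[A != 0]).min())
print("coefficient range after: ", np.abs(B).max() / np.abs(B[B != 0]).min())
```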

    Marginal empirical likelihood and sure independence feature screening

    We study a marginal empirical likelihood approach in scenarios where the number of variables grows exponentially with the sample size. The marginal empirical likelihood ratios, as functions of the parameters of interest, are systematically examined, and we find that the marginal empirical likelihood ratio evaluated at zero can be used to differentiate whether an explanatory variable contributes to a response variable or not. Based on this finding, we propose a unified feature screening procedure for linear models and generalized linear models. Unlike most existing feature screening approaches, which rely on the magnitudes of some marginal estimators to identify true signals, the proposed screening approach can further incorporate the level of uncertainty in those estimators. This merit inherits the self-studentization property of the empirical likelihood approach and extends the insights of existing feature screening methods. Moreover, we show that our screening approach is less restrictive in its distributional assumptions and can be conveniently adapted to a broad range of scenarios, such as models specified using general moment conditions. Our theoretical results and extensive numerical examples, from simulations and data analysis, demonstrate the merits of the marginal empirical likelihood approach. Published in the Annals of Statistics (http://dx.doi.org/10.1214/13-AOS1139, http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
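    A minimal sketch of the screening idea, under the assumption that variable j is screened through the estimating function z_i = x_ij(y_i − ȳ): the empirical likelihood ratio for E[z] = 0 is computed via the standard dual problem, and features with the largest statistics are retained. The exact estimating function and thresholding rule in the paper may differ.

```python
# Minimal sketch of marginal empirical-likelihood (EL) feature screening.
# For each feature, the EL ratio for E[z] = 0 is 2*sum(log(1 + lam*z)),
# where lam solves sum(z / (1 + lam*z)) = 0; large values suggest the
# feature is marginally associated with the response.
import numpy as np

def el_ratio_at_zero(z, iters=50):
    """Newton iteration for the EL dual; keeps all implied weights positive."""
    lam = 0.0
    for _ in range(iters):
        denom = 1.0 + lam * z
        g = np.sum(z / denom)            # derivative of the dual objective
        h = -np.sum((z / denom) ** 2)    # second derivative (always negative)
        step = g / h
        # Backtrack so that 1 + lam*z > 0 holds for every observation.
        while np.any(1.0 + (lam - step) * z <= 0):
            step /= 2.0
        lam -= step
        if abs(g) < 1e-10:
            break
    return 2.0 * np.sum(np.log(1.0 + lam * z))

rng = np.random.default_rng(0)
n, p = 200, 1000
X = rng.standard_normal((n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n)
stats = np.array([el_ratio_at_zero(X[:, j] * (y - y.mean())) for j in range(p)])
print("top features:", np.argsort(stats)[-5:][::-1])   # should include 0 and 3
```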

    Evaluation of methods to account for release from nanofiber scaffolds

    Electrospinning is a common technique for forming fibers in the micro- to nanometer range. Nanofibers formed through electrospinning can be used as scaffolds, since the fiber structures are similar to structures within the extracellular matrix. Researchers use additives, such as growth factors, to help facilitate cell proliferation and function. Researchers are also attempting to use electrospun fibers for drug delivery and as wound dressings, since electrospun fibers have a high surface-area-to-volume ratio. In both situations, the release of the additive or drug needs to be controlled so that the fibers release it in the desired manner. To understand release from electrospun fibers, researchers develop mathematical models fitted to release data, and they use models based on Fick's second law of diffusion to predict release in cylindrical coordinates. This work aims to understand release from electrospun fibers by finding the relationship between Fick's second law of diffusion and the mathematical models built from experimental data. Three release studies for electrospun fibers are investigated, and predicted mutual diffusion coefficients are developed so that they can be used for future release predictions.
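    The classical Fickian prediction for radial release from a long cylinder with a perfect-sink boundary is the series M_t/M_∞ = 1 − Σ_n (4/β_n²) exp(−D β_n² t/a²), where β_n are the positive roots of the Bessel function J₀ and a is the fiber radius. The sketch below evaluates this profile with illustrative values for the radius and diffusion coefficient; fitting D to measured release data would invert the same expression.

```python
# Minimal sketch of the classical Fickian release profile for a long
# cylinder (radial diffusion, perfect-sink boundary). Radius and diffusion
# coefficient are illustrative values, not fitted to any study's data.
import numpy as np
from scipy.special import jn_zeros

def fractional_release(t, D, a, n_terms=50):
    """Fraction M_t/M_inf released from a cylinder of radius a at times t.
    Truncating the series causes a small positive error near t = 0."""
    beta = jn_zeros(0, n_terms)                  # positive roots of J0
    t = np.atleast_1d(t)[:, None]
    terms = (4.0 / beta**2) * np.exp(-D * beta**2 * t / a**2)
    return 1.0 - terms.sum(axis=1)

a = 500e-9                                       # hypothetical fiber radius: 500 nm
D = 1e-16                                        # hypothetical diffusivity, m^2/s
t = np.linspace(0, 3600, 5)                      # one hour of release
print(fractional_release(t, D, a))
```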

    Stochastic processes for modelling bridge deterioration

    Traditionally, bridge management systems were designed using Markov chain models. More recently, researchers have applied the gamma process successfully to structural deterioration problems. This stochastic process captures the temporal variability of degradation and has been applied to a range of structural problems. We report on a study of the modelling of the condition of bridges in the state of NSW, encompassing a large amount of data spanning more than 15 years. We argue for the applicability of the gamma process and other stochastic processes. While the gamma process has been adopted in the past decade on grounds of mathematical tractability and physical motivation, we also observe another distribution for the deterioration at different times. This finding supports the stochastic process modelling direction taken in the past decade and brings forth new models for the time-dependent reliability analysis of bridges.
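    A minimal sketch of the gamma-process model: cumulative deterioration X(t) accrues independent Gamma(shape = c·Δt, scale = θ) increments, so sample paths are monotone non-decreasing, and the time-dependent failure probability can be estimated by simulating paths against a condition threshold. The parameter values below are illustrative, not fitted to the NSW bridge data.

```python
# Minimal sketch of gamma-process deterioration: monotone non-decreasing
# damage paths built from independent Gamma increments, used to estimate
# the probability of crossing a failure threshold within a horizon.
import numpy as np

rng = np.random.default_rng(1)
c, theta = 0.8, 0.5        # hypothetical shape rate (per year) and scale
years, n_paths = 50, 10000
dt = 1.0

increments = rng.gamma(shape=c * dt, scale=theta, size=(n_paths, years))
paths = increments.cumsum(axis=1)                 # X(t) at t = 1..years

threshold = 10.0                                  # hypothetical failure level
crossed = paths[:, -1] >= threshold
first_crossing = np.argmax(paths >= threshold, axis=1) + 1  # first year over
print("P(failure within 50 yr):", crossed.mean())
print("mean crossing time (yr):", first_crossing[crossed].mean())
```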